Search Results: "bruce"

18 December 2012

Russ Allbery: Review: Liars and Outliers

Review: Liars and Outliers, by Bruce Schneier
Publisher: John Wiley
Copyright: 2012
ISBN: 1-118-14330-2
Format: Hardcover
Pages: 285
One of the perils of buying a book written by a blogger one reads regularly is that the book may be little more than a rehashing of their blog, with insufficient original material to warrant the time investment. Sometimes it's still nice to support them financially, but it may not make sense to read the book. I've been following Schneier's blog for years (as should anyone with an interest in security), including through the entire process of writing Liars and Outliers, and was a bit worried that might be the case here. Thankfully, I can reassure any other worried potential readers that this is not the case. This is substantial new material establishing a firm framework for thinking about incentives and controls in any society or organization.

Liars and Outliers talks about security mechanisms, but it's not, at its core, a book about security. Rather, it's a book about incentives, order, and how order is established. It's a comprehensive reductionist analysis of how societies create predictability and compliance to allow us to trust other people who we have never met before and will never meet again. It's a unique (at least in my experience) combination of anthropology, sociology, security analysis, and political science. Schneier cuts across fields in an idiosyncratic but illuminating way that reminded me of (on an entirely separate topic) Jane Jacobs.

This is not a prescriptive book, nor is it a collection of answers, solutions, or even deep analysis of particular problems. Rather, it's an attempt to construct a general framework for analyzing societal dilemmas: conflicts between individual desires and social good, how those conflicts are resolved, and how societies can weigh the scales and influence the statistical outcomes. The closest Schneier comes to telling the reader how to solve problems is a checklist, at the end of the book, for designing effective societal pressures. Its primary contribution is vocabulary and structure. It also passes one of my litmus tests for any book about human behavior: Schneier complicates, broadens, deepens, and expands understanding, and points out complex interactions and complex feedback in effects we're inclined to consider simple, rather than simplifying or eliminating human complexity.

One point I found refreshing about this book is that Schneier is scrupulous in refusing to define either society or individuals as good or bad, to the point of carefully defining the terminology used in all of the social dilemmas. Following societal rules is called compliance; not following those rules is called defection. In some cases, defection is morally correct (Schneier's most frequent example is the civil rights struggles of the 1960s in the United States). In all cases, social pressures are tools, which can be used to encourage compliance with moral or immoral systems, and which are deployed by totalitarian dictatorships and utopian communes alike. Schneier explicitly puts out of scope for this book the questions of how societal goals should be determined, how they change, and whether any given societal rule or interest is moral or immoral. He focuses, rather, only on the mechanisms, with a primary goal of informing and deepening debates over how best to encourage behavior that societies want to encourage and discourage behavior societies want to discourage. He also emphasizes discourage, as opposed to eliminate.
Early on, Schneier shows some of the results of game theory, as well as basic common sense, that indicate that no healthy society can totally eliminate defection. Not only would that stifle valuable and important reform, such as changes in civil rights, but the degree of pressure required is immense. Defectors are natural and will always exist, and are sometimes valuable and necessary. Rather, the goal of a society is to reduce defectors to a level where most people can ignore their existence most of the time, a state that leads to the level of risk and trust required to have a functioning and healthy society.

The word society, similarly, is intentionally broad, and can refer to just about any collection of people, from a circle of friends to a corporation, institution, or country. However, as Schneier points out early in the book, small societies rarely need much in the way of formal pressure and appear almost magically self-governing. That's the first property that he disassembles, resulting in a general classification of societal pressures into four categories: moral, reputational, institutional, and security systems. The last is an odd category that's partly orthogonal to the other three.

Moral pressure is internalized conscience: the normal tendency of nearly everyone to follow their own moral code, a code that's at least partly constructed and certainly heavily influenced by the surrounding society. Reputational pressure is, in a sense, externalized morality: it's the informal reactions of others around one to one's past actions. Included in reputational pressure is shunning of every kind, from cutting off a friendship to boycotts against corporations, but it also includes confrontation from another member of one's society, the more subtle effects of our individual desires to be liked and respected, and all the various aspects of "face", honor, and respect within a community.

Small communities are frequently self-governing, in Schneier's model, because they're small enough that moral and reputational pressures are sufficient and no other pressures are required. We're so used to applying moral and reputational pressure to other humans almost unconsciously that we sometimes don't even notice its existence, leading to that "magical" self-governing property. But Schneier puts pressures in a sequence: pressures that work extremely well with small groups often don't scale. Moral pressure works best with small groups and reputational pressure with somewhat larger groups, but when societies scale beyond the limits of reputational pressure (when, for example, one frequently interacts with people whose reputations are unknown to you and whose subsequent opinion of you will not be relevant), institutional pressure is required to force compliance. Institutional pressure is the sort of pressure that we all tend to think of first when we look for ways to enforce rules: laws, policies, contracts, and other codes of behavior that carry with them formal punishments and some enforcement mechanism. But even in societies so large that institutional pressures are frequently required, such as whole countries, moral and reputational pressures still exist and are extremely important. One of Schneier's most interesting points is his analysis of how institutional pressures can paradoxically undermine reputational and moral pressures, resulting in more defection than if the institutional pressure hadn't existed.
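As a rough sketch of what such a societal dilemma looks like in game-theoretic terms (my own illustration, with made-up payoff numbers, not a matrix taken from the book), consider the classic two-person payoff table in which each actor can cooperate (comply) or defect:

\[
\begin{array}{c|cc}
 & \text{other cooperates} & \text{other defects} \\ \hline
\text{cooperate} & (3,\,3) & (0,\,5) \\
\text{defect}    & (5,\,0) & (1,\,1)
\end{array}
\]

Whatever the other player does, defecting pays more for the individual (5 > 3 and 1 > 0), yet mutual defection (1, 1) leaves everyone worse off than mutual cooperation (3, 3). Societal pressures, in this framing, work by shifting the effective payoffs (through conscience, reputation, law, or technical barriers) so that compliance becomes the individually sensible choice for most people most of the time.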
This is just the basic framework of Schneier's analysis, hopefully giving you a feel for the structure of the book. He goes much deeper into the complicated interactions between the various levels of pressure, and then dives into an extensive look at competing societal dilemmas: cases where there is more than one society in play simultaneously, possibly demanding contradictory actions. Liars and Outliers also includes a wonderful analysis of organizational entities within the same framework, including their much-different reactions to moral, reputational, and institutional pressures. One of the most cogent analyses of the difficulties of regulating both corporations and governmental institutions falls out of that analysis, once one looks at them in light of Schneier's basic framework. Pressures quickly become complex and multi-layered, and human reactions to pressures are frequently counter-intuitive. Schneier draws extensively on game theory to show that some counter-intuitive responses are actually emergent properties of logical analysis of the situation, but that others are more uniquely human and have little or nothing to do with a mathematical cost-benefit analysis.

I haven't even mentioned his discussion of security systems, and how they can extend moral, reputational, and institutional pressures, as well as add a new type of pressure (making a defecting action impossible) that scales even better than institutional pressures.

Liars and Outliers has all of the supporting infrastructure you would expect in a scholarly book: notes, extensive references, and a good index. I suspect it will end up being used as at least additional reading in college classes. The notes are, unfortunately, end notes, making the full context of the book much harder to read than was necessary, but at least Schneier does separate the notes from the references so that one doesn't chase notes for further explanation and find a simple citation.

As with any book like this, one always wishes it could end in a simple prescription to fix everything, but of course it doesn't. But that's also a measure of a good scholarly work. Human and organizational motivations are complex and tricky, and any framework for analyzing them needs to be able to represent that complexity. Schneier here has constructed a very powerful one, one that I started using in discussions before I'd even finished the book. Perhaps the most valuable contribution of Liars and Outliers to public discussion is clear terminology and categories, which can be of great help in finding the core components of a problem.

Liars and Outliers can be slow going, particularly early on when Schneier is still defining terms and setting up the background of his analysis. One can get a bit tired of the analysis matrices of societal dilemmas. But stick with it through the groundwork, since the analyses of competing societal dilemmas and of the impact of societal pressures on organizations are exceptional. Highly recommended, particularly for anyone who is designing or implementing societal pressures: managers, political activists, or anyone in a security-related field.

Rating: 8 out of 10

21 November 2012

Erich Schubert: Phoronix GNOME user survey

While not everybody likes Phoronix (common complaints include tabloid journalism), they are doing a GNOME user survey again this year. If you are concerned about Linux on the desktop, you might want to participate; it is not particularly long.
Unfortunately, "the GNOME Foundation still isn't interested in having a user survey", and may again ignore the results; and already last year you could see a lot of articles along the lines of The Survey That GNOME Would Rather Ignore. One more reason to fill it out.

31 May 2012

Russell Coker: Links May 2012

Vijay Kumar gave an interesting TED talk about autonomous UAVs [1]. His research is based on helicopters with 4 sets of blades and his group has developed software to allow them to develop maps, fly in formation, and more.

Hadiyah wrote an interesting post about networking at TED 2012 [2]. It seems that giving every delegate the opportunity to have their bio posted is a good conference feature that others could copy.

Bruce Schneier wrote a good summary of the harm that post-9/11 airport security has caused [3].

Chris Neugebauer wrote an insightful post about the drinking culture in conferences, how it excludes people and distracts everyone from the educational purpose of the conference [4].

Matthew Wright wrote an informative article for Beyond Zero Emissions comparing current options for renewable power with the unproven plans for new nuclear and fossil fuel power plants [5].

The Free Universal Construction Kit is a set of design files to allow 3D printing of connectors between different types of construction kits (Lego, Fischertechnik, etc) [6].

Jay Bradner gave an interesting TED talk about the use of Open Source principles in cancer research [7]. He described his research into drugs which block cancer by converting certain types of cancer cell into normal cells, and how he shared that research to allow the drugs to be developed for clinical use as fast as possible.

Christopher Priest wrote an epic blog post roasting everyone currently associated with the Arthur C. Clarke awards; he took particular care to flame Charles Stross, who celebrated The Prestige of such a great flaming by releasing a t-shirt [8]. For a while I've been hoping that an author like Charles Stross would manage to make more money from t-shirt sales than from book sales. Charles is already publishing some of his work for free on the Internet and it would be good if he could publish it all for free.

Erich Schubert wrote an interesting post about the utility and evolution of Facebook likes [9].

Richard Hartmann wrote an interesting summary of the problems with Google products that annoy him the most [10].

Sam Varghese wrote an insightful article about the political situation in China [11]. The part about the downside of allowing poorly educated people to vote seems to apply to the US as well.

Sociological Images has an article about the increased rate of Autism diagnosis as social contagion [12]. People who get their children diagnosed encourage others with similar children to do the same.

Vivek wrote a great little post about setting up WPA on Debian [13]. It was much easier than expected once I followed that post. Of course I probably could have read the documentation for ifupdown, but who reads docs when Google is available? Related posts:
  1. Links March 2012 Washington's Blog has an informative summary of recent articles about...
  2. Links April 2012 Karen Tse gave an interesting TED talk about how to...
  3. Links February 2012 Sociological Images has an interesting article about the attempts to...

5 April 2012

Petter Reinholdtsen: Debian Edu in the Linux Weekly News

About two weeks ago, I was interviewed via email about Debian Edu and Skolelinux by Bruce Byfield in Linux Weekly News. The result was made public for non-subscribers today. I am pleased to see that he liked our Linux solution for schools. Check out his article Debian Edu/Skolelinux: A distribution for education if you want to learn more.

28 January 2012

Russ Allbery: Review: Effective Java, Second Edition

Review: Effective Java, Second Edition, by Joshua Bloch
Publisher: Addison-Wesley
Copyright: 2008
Printing: October 2009
ISBN: 0-321-35668-3
Format: Trade paperback
Pages: 319
Guy Steele, in the foreword to this book, writes:
Programs, unlike spoken sentences and unlike most books and magazines, are likely to be changed over time. It's typically not enough to produce code that operates effectively and is readily understood by other persons; one must also organize the code so that it is easy to modify. There may be ten ways to write code for some task T. Of those ten ways, seven will be awkward, inefficient, or puzzling. Of the other three, which is most likely to be similar to the code needed for the task T in next year's software release?
Teaching this, as well as how to avoid being awkward, inefficient, or puzzling, is the goal of this book. Clearly communicating effective, idiomatic, and maintainable usage to a newcomer to a programming language is one of the hardest types of programming books to write. Books like this are therefore quite scarce. Most introductory texts do try to communicate some degree of basic usage, but they rarely go far beyond the syntax, and when they do that usage is rarely both well-defended and non-obvious. Bloch takes the concept quite far indeed, going deep not only into the Java language but also into object-oriented software construction in general.

Effective Java is modeled after Effective C++ by Scott Meyers, a book I've not read (due to the lack of need for C++ in my programming life) but which I've heard a great deal about. This means the book is organized into 78 numbered items, each of which provides specific advice and analysis about one area of Java. Examples include item 16, "Favor composition over inheritance," or item 33, "Use EnumMap instead of ordinal indexing." As you can see, they run the gamut from high-level design principles to specific coding techniques.

This sort of book demands a lot of the author. Everyone has a coding style, and everyone can make usage recommendations, but the merits or lack thereof of specific recommendations are often only visible with substantial later experience. More than any other type of programming language book, this sort of usage guide must be written by a language expert with years of experience with both good and bad code in the language. This is where Effective Java shines. Joshua Bloch led the design and implementation of significant portions of the Java core libraries at Sun and is currently the chief Java architect at Google, but even without knowing that background, his expertise is obvious. Every item in this book is backed up with specific examples and justification, and Bloch quotes extensively from the Java core library to illustrate both the advantages of the techniques he describes and the problems that result when they're not followed.

This is not an introductory book, which is one of the things that makes it so efficient and concise. It's a book aimed at the intermediate or advanced Java programmer and assumes you already know the language and the basic pitfalls. There are only a few items in here that would be obvious to most experienced programmers, and even there Bloch ties them back to specific issues in Java in ways that are illuminating. I would not have expected to learn something new from a chapter on a hoary old problem like avoiding float and double for precise values, but I did: Bloch discusses the available alternatives within Java and their tradeoffs and then makes useful specific recommendations.

If, like me, you're an experienced programmer already but relatively new to Java, you still should not read this book first. You need a general introduction to the language and libraries and a few projects under your belt before you can appreciate it. (I personally started with Thinking in Java by Bruce Eckel and it served me well, although on several points of style Bloch disagrees with advice in Eckel's book, and I find Bloch's arguments convincing.) But I think this is one of the best possible choices for your second book on Java, in large part because Bloch will head off bad design and style decisions that you don't realize you're making and catch them before they become entrenched.
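To make the point about float and double concrete, here is a minimal Java sketch of the kind of pitfall that chapter is about (my own illustration, not code from the book; the class name and amounts are invented):

import java.math.BigDecimal;

public class PreciseValues {
    public static void main(String[] args) {
        // double cannot represent most decimal fractions exactly, so a
        // simple subtraction drifts away from the expected 0.61.
        System.out.println(1.03 - 0.42);   // prints 0.6100000000000001

        // BigDecimal built from Strings keeps decimal quantities exact,
        // at the cost of more verbose and slower arithmetic.
        BigDecimal exact = new BigDecimal("1.03").subtract(new BigDecimal("0.42"));
        System.out.println(exact);         // prints 0.61
    }
}

A common alternative, and part of the tradeoff discussion, is to track such quantities as scaled integers (for example, cents in an int or long), which avoids both the rounding surprises and the BigDecimal overhead.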
I'm glad I read it as soon as I knew the language well enough to absorb it, and it's the sort of book that I'm likely to re-read sections of whenever I work on Java code related to those topics. It's not entirely obvious that you should take my advice about this sort of book, since I'm not a Java expert and don't have those years of experience with it. But I've checked the recommendation with other programmers I know who are experts, and I've never heard anything but praise for it. It's also one of the books recommended in Coders at Work, and Bloch is one of the people interviewed there, which carries a lot of weight with me. And, apart from that, any long-time programmer who cares about their craft builds an internal sense of aesthetics around what a well-written program should look like and finds themselves recognizing a similar sense in other people's code, even in languages with which they're not familiar. Bloch's recommendations and analysis feel right; one can immediately see how they improve maintainability and robustness, and some of the techniques he shows are elegant and beautiful.

This is not a general programming book. It's specifically focused on the Java language, and much of it deals with specific suggestions on how to use Java's core libraries and language features. If you're not going to be writing code in Java, I can't really recommend it. But one of the things I loved about it is that, while talking about Java, Bloch also talks about object-oriented software construction: techniques for extending foreign libraries one does not control, API design, and the proper use of inheritance, among other topics. That advice is some of the best object-oriented software design advice I've ever read. There isn't enough of it to recommend the book to people with no interest in Java, but this book has even made my C and Perl code better, and has helped me grasp the tradeoff between inheritance and composition in a deeper way than I ever had before. It's a lovely side bonus.

If you're writing Java, read this book. If you're learning Java, don't read it first, but read it second, and more than once, or alongside a project where you can apply the advice. It's dense and efficient in the information that it conveys, which means there's more in a couple of pages here than in thirty or forty pages of some of the sprawling introductory programming books. I did read it cover to cover, which is one of the better ways to get a sense of Bloch's more general advice on software construction, but you'll hit information overload and will want to return to it piecemeal to fully absorb it. And do get the second edition. I'm sure the first edition is available cheap used, but the additions of enums and generics to the Java language are hugely important and provide some of the most elegant and graceful techniques in the book.

Rating: 9 out of 10

13 January 2012

Raphaël Hertzog: People Behind Debian: Steve McIntyre, debian-cd maintainer, former Debian Project Leader

Steve McIntyre has been contributing to Debian since 1996, 2 years before I joined! But I quickly stumbled upon Steve: in 1999, he was struggling with getting his debian-cd script to produce 2 ISO images (it was the first time that Debian no longer fit on a single CD), and I helped him by rewriting debian-cd with a robust system to split packages on as many ISO images as required. I remember those times very well because Steve was very supportive of my efforts and it was a real pleasure to get this done. His friendly nature probably also explains why he got elected Debian Project Leader twice! Anyway, enough history, check out his interview to learn more about the great work he's doing nowadays. My questions are in bold, the rest is by Steve.

Raphael: Who are you?

Steve: I'm a professional software engineer, 37, living in Cambridge (England) with my new wife Jo. I studied for the EIST degree at the University of Cambridge, then (like many people here, it seems) I just forgot to go home again afterwards and settled here. I spent more of my study time playing with Linux than working on my degree, so I guess I'm lucky that it worked and I found a career in that area!

Raphael: How did you start contributing to Debian?

Steve: During my time in college, I started hacking on software in my free time, using Slackware as my first Linux distribution from the middle of 1994. After encountering more and more problems with Slackware, I was encouraged by a number of friends to make the jump over to Debian, and in October 1996 I did. The installation process back then was much harder than anything people see today, but after a long weekend I finally had my Debian system up and running. I was already one of the main upstream developers for the Mikmod music player at that time, so that very same weekend I applied to be a DD so I could maintain it in Debian too. Back then, the NM process was much simpler: I just mailed a key to Bruce and he set me up with an account almost immediately! I then found that Joey Hess had beaten me to it and already packaged Mikmod. Grrr! :-)

Raphael: What's your biggest achievement within Debian?

Steve: Without a doubt, my proudest achievement within Debian is being elected Project Leader for 2 years by the other developers. It's a great feeling to have earned the trust of your friends and peers, and also a great responsibility to go and help Debian where needed: talking to the press about Debian, assisting wherever problems crop up, etc. The DPL job is certainly a lot of hard work, and I have nothing but respect for anybody who volunteers for it.
It's a great feeling to have earned the trust of your friends and peers.
Elsewhere, I've been leading the Debian CD team for years too, both doing most of the maintenance of the debian-cd package and producing and testing the regular installation CDs and DVDs that we ship to the world. Again, this is a time-consuming job but it needs doing and it's worthwhile.

Raphael: You're currently employed by ARM. What are you working on and are they supportive of your Debian involvement?

Steve: The situation within ARM is very interesting; I'm employed in PDSW (Processor Division, SoftWare), a new group founded just a couple of years back to help improve the state of software on ARM. Most of the people in the group are working on Free Software at this stage (e.g. toolchains, browsers, Linux kernel), which is lovely. Some of the engineers have also been seconded into a new non-profit company Linaro, which is a collaboration between ARM and a number of other companies investing in core Linux software and tools for ARM-based CPUs. I'm one of the ARM engineers in Linaro, and I'm a Technical Architect in the Office of the CTO. My role includes looking at future projects for Linaro to help with (e.g. ARM servers), but for the last few months I've been concentrating on the new armhf architecture in Debian, Ubuntu and elsewhere. armhf is a new architecture in Debian and Ubuntu terms, but it's not strictly a new type of hardware. Instead, it's a new ABI. We have two reasons for doing this work:
  1. It targets the latest version of 32-bit ARM CPUs (v7) and makes better use of the hardware, for better performance. Compare targeting i686 instead of i386, for example. We'll still support the older armel port for the foreseeable future for users with older hardware that can't run armhf.
  2. More importantly: we are standardising on the ABI / compiler options / hardware support for future users.
In the past, there has been a huge amount of specialisation (aka fragmentation) in the ARM Linux environment, and that worked OK for specialised devices that only ever ran the software shipped with them. ARM CPUs are now becoming more and more mainstream, so people will expect to be able to install generic software on their machines. That gives a requirement for a standard base platform, and armhf (arm-linux-gnueabihf in GNU triplet terms) is that standard that we are pushing in the community. Debian, Ubuntu, Fedora, Suse and others are all going to use this, making compatibility possible. I've been working with a small team of people to make armhf happen, helping where needed: putting together build machines; patching Debian packages directly; discussing and fixing toolchain issues with Ubuntu folks; agreeing ABI specifications with people from Fedora; advising people from other distros bootstrapping their new ARM ports. ARM and Linaro are very supportive of this work, and it's been lovely being sponsored to work directly on Free Software like this. It's work that will directly benefit ARM and its partners (of course!), but it's also helping out more generally too: Debian QA work, cross-build support, bootstrapping efforts, multi-arch. More and more of the ARM market is driven by Free Software, and companies are acknowledging that. I should probably also mention that we're hiring! :-)

Raphael: What are your plans for Debian Wheezy?

Steve: There are three main tracks here. Obviously, I'm interested in seeing armhf release with Wheezy. We've just been added to Testing last weekend, so that's going well. We've got over 90% of the archive built now, and we're mopping up the remaining issues. I'm the primary maintainer of cdrkit at this point, but I'd prefer to have it go away. Xorriso and the associated software in libisoburn is almost capable of replacing all the aging cdrtools-derived software that we have in Debian. The only missing feature that I'm aware of is creating the HFS hybrid filesystems that we use for installations on Mac systems. I've been talking with the upstream folks about this for some time already, and I'm hoping we can finish this soon enough that we can get it into Wheezy. Finally, I've got the ever-growing wishlist of things for debian-cd. We've got the beginnings of an automated test suite that Martín Ferrari has written, but it needs integrating and improving. I want to help get regular weekly/daily/release debian-live builds running on the main CD build machine. There's work needed if we want to make good installation media for the new multi-arch world, too. The Emdebian people are asking for help making CD images... The list goes on :-)

Raphael: The ARM community seems to be very interested in multi-arch. Can you explain why?

Steve: There are a number of reasons for ARM people to be interested in multi-arch; two really stand out for me:
This is potentially the killer app for multi-arch: simply install the libraries for the target architecture [ ], install a simple cross-gcc package [ ] and you're all set.
Raphael: What's the biggest problem of Debian?

Steve: For me, Debian's biggest problem has been the same for a long time: we are forever short of enough people to do the work that we're trying to do. That might sound like a weird thing to claim when Debian is one of the largest Free Software projects on the planet, but it's more a statement of just how huge our goals are. Many of the largest things in Debian are developed or controlled by very small teams working very hard, and there's always a risk of losing people due to burnout in those situations.
We are forever short of enough people to do the work that we're trying to do.
Some of the tasks that should be easy given our large membership (e.g. large-scale packaging transitions) can often instead take a very long time. We are fortunate to have more people wanting to join in Debian's work all the time, but we also need to be careful to keep on promoting what we're doing and recruiting new contributors, encouraging them to get more and more involved in core work. Debian gets ever bigger in terms of the size and the number of packages we distribute; we're not currently matching that growth rate elsewhere.

Raphael: What motivates you to continue to contribute year after year?

Steve: This one is much easier to answer! The thing that first attracted me to Debian was the fact that I could help to develop it, help to decide how things could and should be done within it. Instead of being forced to accept what some corporation decided I could do with my computer, I could change the software to suit my needs and preferences. Alongside that, I could get involved with a strong community of similar people all over the world, all with their own strong opinions about how software should work. I joined in and found it was great fun and very rewarding. That hasn't changed for me in the intervening years, and that's why I'm still around. I work on Debian because it helps me to get the OS that I want to use. It seems that lots of people around the world find it useful too, and that's awesome. :-)

Raphael: Do you believe that Stefano Zacchiroli will be the first DPL who managed to stay 3 consecutive years on the seat? Would you like him to stand again?

Steve: To be honest, I would be very surprised if Zack stood again for DPL this year. He told me himself that he wasn't planning on it, and I can understand that decision. He's been an awesome DPL in my opinion, and I'm glad that he took the job. But: it is also a very difficult and time-consuming task that would be enough to wear down anybody. If Zack does decide to stand again, I would support him 100%. But I know that we also have lots of other good people in Debian who would be ready to take up the challenge next.

Raphael: Is there someone in Debian that you admire for their contributions?

Steve: There are lots of people I admire in Debian, so many that I almost don't want to list individuals here for fear of missing people out. But... :-)

Bdale Garbee has been an inspiration to many of us, for many years. He's technically excellent, a great friend to many of us, an endless source of sage advice and (last but not least) he has some wonderful stories to tell about his experiences over the years. On top of that, he's just cool. :-)

Christian Perrier is another exceptional developer in my eyes; he's great at co-ordinating people in translations, working tirelessly to make this very important part of Debian work better and better with every release. He's also a really nice guy and we all love him.

I also have to mention Joey Hess here, whether he likes it or not. *grin* He's been responsible for so many good things in Debian over the years, even if he did steal my first package...

Finally, the teams of people who make sure that Debian is always working: the security team and DSA. The rest of us can choose to take time off from Debian to go and do other things, but these people need to cover things every day. That's a major responsibility, and I salute them for taking on that challenge.
Thank you to Steve for the time spent answering my questions. I hope you enjoyed reading his answers as much as I did. Note that you can find older interviews on http://wiki.debian.org/PeopleBehindDebian.



6 October 2011

Craig Small: @ 0x28

It doesn't look so old in hex; Zero, x, Two, Eight, but I finally got there. So on this day, what other landmarks am I up to? Some people get a little sad hitting this age, but it really is only a number, whether it is 0x28, \050 or even 40. As the saying goes: only the dead don't age.
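For anyone checking the arithmetic behind the hex and octal spellings (my own note, not part of the original post):

\[
0\mathrm{x}28 = 2 \cdot 16 + 8 = 40, \qquad 050_{\text{octal}} = 5 \cdot 8 = 40
\]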

24 September 2011

Rémi Vanicat: On security for closed source software

Thanks to Bruce Schneier's security blog, I came across an interesting article about liability and software. The problem is well known. Of course, for better security, the solution could be to not use proprietary software at all; still, a law as proposed in the ACM article could be useful to protect Madame Michu.

1 August 2011

Iustin Pop: New harddrives WD RE4

I've recently replaced my old RE3 harddrives (500GB) with new RE4 (non-green power) 2TB harddrives. Here is some of the data I gathered, in case anyone is interested (I usually look for this kind of information before buying). SMART information:
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.0.0-ruru0] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
=== START OF INFORMATION SECTION ===
Model Family:     Western Digital RE4 Serial ATA
Device Model:     WDC WD2003FYYS-02W0B0
Serial Number:    WD-XXXXXXXXXXXX
LU WWN Device Id: X XXXXXX XXXXXXXXX
Firmware Version: 01.01D01
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Size:      512 bytes logical/physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Thu Jul 28 20:31:24 2011 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status:  (0x80) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (28860) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 255) minutes.
Conveyance self-test routine
recommended polling time:        (   5) minutes.
SCT capabilities:              (0x303f) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   100   253   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   100   253   021    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       4
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   100   253   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       0
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       3
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       2
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       1
194 Temperature_Celsius     0x0022   120   116   000    Old_age   Always       -       32
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   253   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]
SMART Selective self-test log data structure revision number 1
     SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
        1        0        0  Not_testing
        2        0        0  Not_testing
        3        0        0  Not_testing
        4        0        0  Not_testing
        5        0        0  Not_testing
Selective self-test flags (0x0):
      After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
hdparm information:
ATA device, with non-removable media
powers-up in standby; SET FEATURES subcmd spins-up.
        Model Number:       WDC WD2003FYYS-02W0B0
        Serial Number:      WD-XXXXXXXXXXXX
        Firmware Revision:  01.01D01
        Transport:          Serial, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6
Standards:
        Supported: 8 7 6 5
        Likely used: 8
Configuration:
    Logical         max     current
    cylinders       16383   16383
    heads           16      16
    sectors/track   63      63
    --
    CHS current addressable sectors:   16514064
    LBA    user addressable sectors:  268435455
    LBA48  user addressable sectors: 3907029168
    Logical/Physical Sector size:           512 bytes
    device size with M = 1024*1024:     1907729 MBytes
    device size with M = 1000*1000:     2000398 MBytes (2000 GB)
    cache/buffer size  = unknown
    Nominal Media Rotation Rate: 7200
Capabilities:
    LBA, IORDY(can be disabled)
    Queue depth: 32
    Standby timer values: spec'd by Standard, with device specific minimum
    R/W multiple sector transfer: Max = 16  Current = 0
    Advanced power management level: 128
    Recommended acoustic management value: 128, current value: 254
    DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
                 Cycle time: min=120ns recommended=120ns
    PIO: pio0 pio1 pio2 pio3 pio4
                 Cycle time: no flow control=120ns  IORDY flow control=120ns
Commands/features:
        Enabled Supported:
           *    SMART feature set
                Security Mode feature set
           *    Power Management feature set
           *    Write cache
           *    Look-ahead
           *    Host Protected Area feature set
           *    WRITE_BUFFER command
           *    READ_BUFFER command
           *    NOP cmd
           *    DOWNLOAD_MICROCODE
           *    Advanced Power Management feature set
           *    Power-Up In Standby feature set
           *    SET_FEATURES required to spinup after power up
                SET_MAX security extension
           *    Automatic Acoustic Management feature set
           *    48-bit Address feature set
           *    Device Configuration Overlay feature set
           *    Mandatory FLUSH_CACHE
           *    FLUSH_CACHE_EXT
           *    SMART error logging
           *    SMART self-test
           *    General Purpose Logging feature set
           *    WRITE_{DMA|MULTIPLE}_FUA_EXT
           *    64-bit World wide name
           *    IDLE_IMMEDIATE with UNLOAD
           *    WRITE_UNCORRECTABLE_EXT command
           *    {READ,WRITE}_DMA_EXT_GPL commands
           *    Segmented DOWNLOAD_MICROCODE
           *    Gen1 signaling speed (1.5Gb/s)
           *    Gen2 signaling speed (3.0Gb/s)
           *    Native Command Queueing (NCQ)
           *    Phy event counters
           *    Idle-Unload when NCQ is active
           *    NCQ priority information
           *    DMA Setup Auto-Activate optimization
           *    Software settings preservation
           *    SMART Command Transport (SCT) feature set
           *    SCT Long Sector Access (AC1)
           *    SCT LBA Segment Access (AC2)
           *    SCT Error Recovery Control (AC3)
           *    SCT Features Control (AC4)
           *    SCT Data Tables (AC5)
                unknown 206[12] (vendor specific)
                unknown 206[13] (vendor specific)
Security:
    Master password revision code = 65534
            supported
    not     enabled
    not     locked
    not     frozen
    not     expired: security count
            supported: enhanced erase
    294min for SECURITY ERASE UNIT. 294min for ENHANCED SECURITY ERASE UNIT.
Logical Unit WWN Device Identifier: XXXXXXXXXXXXXXXX
    NAA             : X
    IEEE OUI        : XXXXXX
    Unique ID       : XXXXXXXXX
Checksum: correct
Using fio running on all three drives in read mode results in a nice ~420 MiB/s average throughput at the start of the disk, not bad. This is the zone bandwidth graph: [bandwidth graph] and the IOPS graph: [IOPS graph]. The IOPS don't compare with either a 10K drive or any SSD, but the bandwidth is not that bad for a mechanical harddrive.

25 May 2011

Russell Coker: Links May 2011

John W. Dean wrote an insightful series of three articles for FindLaw about Authoritarian Conservatives [1]. In summary there are Authoritarian Followers who follow their leader blindly and Authoritarian Leaders who do whatever it takes to gain and maintain power. The Authoritarian mindset lends itself towards right-wing politics.

Mick Ebeling gave an inspiring TED talk about his work developing a system to produce art that is controlled by eye movements [2]. The development work was started to support the quadriplegic graffiti artist TEMPT1. Mick's most noteworthy point is that all the hardware design and software are free so anyone can implement it without asking an insurance company or hospital (this is one of the few occasions when a TED speaker has received a standing ovation during a talk). The Eyewriter.org site has the designs and source which is licensed under the GPL [3].

Morgan Spurlock (who is famous for Super Size Me) gave an amusing TED talk titled The Greatest TED Talk Ever Sold [4]. He provides some interesting information about the brand sponsorship process and his new movie The Greatest Movie Ever Sold.

Ralph Langner gave an interesting TED talk about reverse-engineering the Stuxnet worm and discovering that it was targeted at the Iranian nuclear program [5]. The fact that the Stuxnet environment could be turned to other uses such as disrupting power plants is a great concern, particularly as it has special code to prevent automatic safety systems from activating.

Angela Belcher gave an interesting TED talk about using nature to grow batteries [6]. She is evolving and engineering viruses to manufacture parts of batteries and assemble them; the aim is to scale up the process to manufacture batteries for the Prius and other large devices at room temperature with no toxic materials. She is also working on biological methods of splitting water into hydrogen and oxygen which has the obvious potential for fuel-cell power and also solar PV cells. As an aside she mentions giving a copy of the Periodic Table to Barack Obama, and he told her that he will look at it periodically.

Bruce Schneier gave a good overview of the issues related to human perceptions of security in his TED talk about The Security Mirage [7]. There isn't much new in that for people who have been doing computer work but it's good to have an overview of lots of issues.

TED has an interesting interview with Gerry Douglas about his work developing touch-screen computer systems for processing medical data in Malawi [8]. This is worth reading by everyone who is involved in software design; many of the things that he has done go against traditional design methods.

Mike Matas gave an interesting demo at TED of the first proper digital book [9]. The book is by Al Gore and is run on the iPad/iPhone platform (hopefully they will have an Android version soon). His company is in the business of licensing software for creating digital books. The demonstration featured a mixture of pictures, video, audio, and maps with the pinch interface to move them around.

Dr Sommers of Tufts University wrote an interesting post for Psychology Today titled Why it's Never About Race [10]. It seems that there are lots of patterns of people being treated differently on the basis of race but for every specific case no-one wants to believe that racial bias was involved.

The Register has an amusing article about what might have happened if Kate had left Prince William at the altar [11].
Fiorenzo Omenetto gave an interesting TED talk about synthetic silk [12]. He is working on developing artificial fibers and solids based on the same proteins as silk which can be used for storing information (DVDs and holograms), medical implants (which can be re-absorbed into the body and which don't trigger an immune response), and cups among other things. Maybe my next tie will have a "no pupae were harmed in the production" notice. ;)

The CDC has released a guide to preparing for a Zombie apocalypse [13]. While it's unlikely that Zombies will attack, the same suggestions will help people prepare for the other medical emergencies that involve the CDC.

Salon has an interesting article by Glenn Greenwald who interviewed Benjamin Ferencz about aggressive warfare [14]. Benjamin was a prosecutor for war crimes at Nuremberg after WW2 and compares the US actions since 9-11 with what was deemed to be illegal by the standards of WW2.

Eli Pariser gave an interesting TED talk about Online Filter Bubbles [15]. He claims that services such as Facebook and Google should give more of a mixture of results rather than targeting what people want. The problem with this idea is that presenting links that someone doesn't want to click doesn't do any good. It's not as if the filter bubble effect relies on modern media or can be easily solved.

Terry Moore gave a TED talk about how to tie shoelaces [16]. Basically he advocates using a doubly-slipped Reef Knot instead of a doubly-slipped Granny Knot. Now I just need to figure out how to tie a doubly-slipped Reef Knot quickly and reliably. Terry uses this as a metaphor for other ways in which one might habitually do something in a non-optimal way.

31 March 2011

Russell Coker: Links March 2011

Cory Doctorow wrote an interesting article for The Guardian about HarperCollins' attempts to make self-destructing books [1]. They claim that a traditional book falls apart after being read 26 times (a claim that Cory disputes based on personal experience working at libraries) and want ebooks to be deleted after being borrowed so often. Really the copyright fascists are jumping the shark here.

Sociological Images has an interesting archive of adverts for supposed treatments for autism, obsessive-compulsive disorder, asperger syndrome, and attention deficit and hyperactivity disorder [2]. The New York University Child Study Center conducted the campaign of fake ransom notes to describe a psychological difference as something that kidnaps a child. The possibility that parents should to some extent learn to adapt to their child's nature rather than fixing them with medication is something that most people can't seem to understand.

William Cronon has written an interesting analysis of the way "Conservative" lobby groups work [3]. They are more organised than I expected.

The Reid Report has a good summary of some of the corporate issues related to the Japanese nuclear melt-down [4]; apparently the company that runs the reactors decided to delay using sea-water in the hope that their investment could be salvaged and thus put everyone at increased risk. I think that this proves that reactors shouldn't be privately owned.

Ian Lowe wrote a good summary of the reasons why Australia should not be using nuclear power, written when we believed that the Fukushima disaster was over [5]. But it turns out that the Fukushima problems were worse than we thought and the melt-down is getting worse.

Christopher Smart wrote a good analysis of Microsoft's latest attempt to extort money from Linux users where they assert patent claims over Android [6]. He points out that .NET/Mono is a risk to Linux.

Major Keary wrote a positive review of Snip Burn Solder Shred, which is a book about "Seriously geeky stuff to make with your kids" [7]. Sounds like a fun book.

The internal network of RSA has been cracked in some way that apparently weakens the security of SecurID; Bruce Schneier's blog comments section has an interesting discussion of the possibilities [8]. I expect that it's a fairly bad attack; if the attack was minor then surely the RSA people would have told us all the details.

Hans Rosling gave an interesting TED talk about The Magic Washing Machine [9]. He describes how his family benefited when his mother first got a washing machine and how this resulted in better education as his mother had more time to get library books for her children. It seems that deploying more electric washing machines should be a priority for improving education and food supplies in third-world countries.

Paul Root Wolpe gave an interesting and disturbing TED talk about bio-engineering [10]. He catalogues the various engineered animals and talks about the potential for future developments.

Ron Rosenbaum wrote an interesting and insightful article for Slate about Maj. Harold Hering, whose military career ended after he asked how to determine whether a nuclear launch order is lawful, legitimate, and comes from a sane president [11]. The question never received a good answer; this is a good reason for moving towards nuclear disarmament and for Americans to vote for the sanest and most intelligent candidate in the presidential elections.

Eythor Bender gave an inspiring TED talk about human exoskeletons [12].
He had live demonstrations on stage of a soldier using an exo-skeleton to carry a heavy backpack and a woman who suffered a severe spinal-cord injury walking after being in a wheel-chair for 19 years.

6 March 2011

Stefano Zacchiroli: on the influence of Debian and derivatives

Counting derivatives

In the news, there's an article by Bruce Byfield discussing the influence of Debian and its (transitive) derivatives on the ecosystem of GNU/Linux distributions: Linux Leaders: Debian and Ubuntu Derivative Distros. The article is a sort of review of what you can find in the vast ecosystem of distributions rooted at Debian: from embedded to supercomputer distros, from netbook to scientific computing distros. The article cites the Debian derivatives front desk and is a study similar to what we might tackle with the derivatives census by Paul Wise. (By the way: did you check if your favorite Debian derivative is already in? No? Do it!)

With this article, Bruce has done me quite a favor in harvesting DistroWatch to refresh the figures about the number of derivatives that I often use in speeches. The need to do that has been polluting my LaTeX "% TODO" comments for a while now... Here they are:

Update: updated figures that Bruce misinterpreted; live data are available. Thanks to Loris (see comments) for noticing.

26 February 2011

Ritesh Raj Sarraf: Patents and the Pharmacy Industry

Most of us would be well versed with the patent system in general. We have patents in every sector - Technology, Agriculture, Pharmacy etc. Most readers in our profession must already be well versed in the pros/cons of the patent system in Software/Technology. Patents in software are more like ammunition. The more you have, the stronger you are. We don't see much cat fighting in Software/Technology because no one organization owns all the patents for a product. An IBM ThinkPad might be using a cool track pad feature which might be a patent of Dell. Since the finished product comprises many patents from different owners in the same industry, it is better to play nice with each other. You scratch my back, I scratch yours. But what about the Pharmaceutical Industry? There, one single patent could comprise the whole product. I was reading this document from Bruce Lehman which touches upon patents and the pharmaceutical industry, but couldn't find an answer to my question.

25 December 2010

Petter Reinholdtsen: The reply from Edgar Villanueva to Microsoft in Peru

A few days ago, the Norwegian Computerworld magazine published an article about how version 2.0 of the European Interoperability Framework has been successfully lobbied by the proprietary software industry to remove the focus on free software. Nothing very surprising there, given earlier reports on how Microsoft and others have stacked the committees in this work. But I find this very sad. The definition of an open standard from version 1 was very good, and something I believe should be used also in the future, alongside the definition from Digistan. Version 2 has removed the open standard definition from its content. Anyway, the news reminded me of the great reply sent by Dr. Edgar Villanueva, congressman in Peru at the time, to Microsoft in response to Microsoft's attack on his proposal regarding the use of free software in the public sector in Peru. As the text was not available from a few of the URLs where it used to be available, I copy it here from my source to ensure it is available also in the future. Some background information about that story is available in an article from Linux Journal in 2002.
Lima, 8th of April, 2002
To: Señor JUAN ALBERTO GONZÁLEZ
General Manager of Microsoft Perú

Dear Sir:

First of all, I thank you for your letter of March 25, 2002 in which you state the official position of Microsoft relative to Bill Number 1609, Free Software in Public Administration, which is indubitably inspired by the desire for Peru to find a suitable place in the global technological context. In the same spirit, and convinced that we will find the best solutions through an exchange of clear and open ideas, I will take this opportunity to reply to the commentaries included in your letter.

While acknowledging that opinions such as yours constitute a significant contribution, it would have been even more worthwhile for me if, rather than formulating objections of a general nature (which we will analyze in detail later) you had gathered solid arguments for the advantages that proprietary software could bring to the Peruvian State, and to its citizens in general, since this would have allowed a more enlightening exchange in respect of each of our positions.

With the aim of creating an orderly debate, we will assume that what you call "open source software" is what the Bill defines as "free software", since there exists software for which the source code is distributed together with the program, but which does not fall within the definition established by the Bill; and that what you call "commercial software" is what the Bill defines as "proprietary" or "unfree", given that there exists free software which is sold in the market for a price like any other good or service.

It is also necessary to make it clear that the aim of the Bill we are discussing is not directly related to the amount of direct savings that can be made by using free software in state institutions. That is in any case a marginal aggregate value, but in no way is it the chief focus of the Bill. The basic principles which inspire the Bill are linked to the basic guarantees of a state of law, such as:
  • Free access to public information by the citizen.
  • Permanence of public data.
  • Security of the State and citizens.
To guarantee the free access of citizens to public information, it is indispensable that the encoding of data is not tied to a single provider. The use of standard and open formats gives a guarantee of this free access, if necessary through the creation of compatible free software. To guarantee the permanence of public data, it is necessary that the usability and maintenance of the software does not depend on the goodwill of the suppliers, or on the monopoly conditions imposed by them. For this reason the State needs systems the development of which can be guaranteed due to the availability of the source code. To guarantee national security or the security of the State, it is indispensable to be able to rely on systems without elements which allow control from a distance or the undesired transmission of information to third parties. Systems with source code freely accessible to the public are required to allow their inspection by the State itself, by the citizens, and by a large number of independent experts throughout the world. Our proposal brings further security, since the knowledge of the source code will eliminate the growing number of programs with *spy code*. In the same way, our proposal strengthens the security of the citizens, both in their role as legitimate owners of information managed by the state, and in their role as consumers. In this second case, by allowing the growth of a widespread availability of free software not containing *spy code* able to put at risk privacy and individual freedoms. In this sense, the Bill is limited to establishing the conditions under which the state bodies will obtain software in the future, that is, in a way compatible with these basic principles. From reading the Bill it will be clear that once passed:
  • the law does not forbid the production of proprietary software
  • the law does not forbid the sale of proprietary software
  • the law does not specify which concrete software to use
  • the law does not dictate the supplier from whom software will be bought
  • the law does not limit the terms under which a software product can be licensed.
  • What the Bill does express clearly, is that, for software to be acceptable for the state it is not enough that it is technically capable of fulfilling a task, but that further the contractual conditions must satisfy a series of requirements regarding the license, without which the State cannot guarantee the citizen adequate processing of his data, watching over its integrity, confidentiality, and accessibility throughout time, as these are very critical aspects for its normal functioning. We agree, Mr. Gonzalez, that information and communication technology have a significant impact on the quality of life of the citizens (whether it be positive or negative). We surely also agree that the basic values I have pointed out above are fundamental in a democratic state like Peru. So we are very interested to know of any other way of guaranteeing these principles, other than through the use of free software in the terms defined by the Bill. As for the observations you have made, we will now go on to analyze them in detail: Firstly, you point out that: "1. The bill makes it compulsory for all public bodies to use only free software, that is to say open source software, which breaches the principles of equality before the law, that of non-discrimination and the right of free private enterprise, freedom of industry and of contract, protected by the constitution." This understanding is in error. The Bill in no way affects the rights you list; it limits itself entirely to establishing conditions for the use of software on the part of state institutions, without in any way meddling in private sector transactions. It is a well established principle that the State does not enjoy the wide spectrum of contractual freedom of the private sector, as it is limited in its actions precisely by the requirement for transparency of public acts; and in this sense, the preservation of the greater common interest must prevail when legislating on the matter. The Bill protects equality under the law, since no natural or legal person is excluded from the right of offering these goods to the State under the conditions defined in the Bill and without more limitations than those established by the Law of State Contracts and Purchasing (T.U.O. by Supreme Decree No. 012-2001-PCM). The Bill does not introduce any discrimination whatever, since it only establishes *how* the goods have to be provided (which is a state power) and not *who* has to provide them (which would effectively be discriminatory, if restrictions based on national origin, race religion, ideology, sexual preference etc. were imposed). On the contrary, the Bill is decidedly antidiscriminatory. This is so because by defining with no room for doubt the conditions for the provision of software, it prevents state bodies from using software which has a license including discriminatory conditions. It should be obvious from the preceding two paragraphs that the Bill does not harm free private enterprise, since the latter can always choose under what conditions it will produce software; some of these will be acceptable to the State, and others will not be since they contradict the guarantee of the basic principles listed above. This free initiative is of course compatible with the freedom of industry and freedom of contract (in the limited form in which the State can exercise the latter). Any private subject can produce software under the conditions which the State requires, or can refrain from doing so. 
Nobody is forced to adopt a model of production, but if they wish to provide software to the State, they must provide the mechanisms which guarantee the basic principles, and which are those described in the Bill. By way of an example: nothing in the text of the Bill would prevent your company offering the State bodies an office "suite", under the conditions defined in the Bill and setting the price that you consider satisfactory. If you did not, it would not be due to restrictions imposed by the law, but to business decisions relative to the method of commercializing your products, decisions with which the State is not involved. To continue; you note that:" 2. The bill, by making the use of open source software compulsory, would establish discriminatory and non competitive practices in the contracting and purchasing by public bodies..." This statement is just a reiteration of the previous one, and so the response can be found above. However, let us concern ourselves for a moment with your comment regarding "non-competitive ... practices." Of course, in defining any kind of purchase, the buyer sets conditions which relate to the proposed use of the good or service. From the start, this excludes certain manufacturers from the possibility of competing, but does not exclude them "a priori", but rather based on a series of principles determined by the autonomous will of the purchaser, and so the process takes place in conformance with the law. And in the Bill it is established that *no one* is excluded from competing as far as he guarantees the fulfillment of the basic principles. Furthermore, the Bill *stimulates* competition, since it tends to generate a supply of software with better conditions of usability, and to better existing work, in a model of continuous improvement. On the other hand, the central aspect of competivity is the chance to provide better choices to the consumer. Now, it is impossible to ignore the fact that marketing does not play a neutral role when the product is offered on the market (since accepting the opposite would lead one to suppose that firms' expenses in marketing lack any sense), and that therefore a significant expense under this heading can influence the decisions of the purchaser. This influence of marketing is in large measure reduced by the bill that we are backing, since the choice within the framework proposed is based on the *technical merits* of the product and not on the effort put into commercialization by the producer; in this sense, competitiveness is increased, since the smallest software producer can compete on equal terms with the most powerful corporations. It is necessary to stress that there is no position more anti-competitive than that of the big software producers, which frequently abuse their dominant position, since in innumerable cases they propose as a solution to problems raised by users: "update your software to the new version" (at the user's expense, naturally); furthermore, it is common to find arbitrary cessation of technical help for products, which, in the provider's judgment alone, are "old"; and so, to receive any kind of technical assistance, the user finds himself forced to migrate to new versions (with non-trivial costs, especially as changes in hardware platform are often involved). 
And as the whole infrastructure is based on proprietary data formats, the user stays "trapped" in the need to continue using products from the same supplier, or to make the huge effort to change to another environment (probably also proprietary). You add: "3. So, by compelling the State to favor a business model based entirely on open source, the bill would only discourage the local and international manufacturing companies, which are the ones which really undertake important expenditures, create a significant number of direct and indirect jobs, as well as contributing to the GNP, as opposed to a model of open source software which tends to have an ever weaker economic impact, since it mainly creates jobs in the service sector." I do not agree with your statement. Partly because of what you yourself point out in paragraph 6 of your letter, regarding the relative weight of services in the context of software use. This contradiction alone would invalidate your position. The service model, adopted by a large number of companies in the software industry, is much larger in economic terms, and with a tendency to increase, than the licensing of programs. On the other hand, the private sector of the economy has the widest possible freedom to choose the economic model which best suits its interests, even if this freedom of choice is often obscured subliminally by the disproportionate expenditure on marketing by the producers of proprietary software. In addition, a reading of your opinion would lead to the conclusion that the State market is crucial and essential for the proprietary software industry, to such a point that the choice made by the State in this bill would completely eliminate the market for these firms. If that is true, we can deduce that the State must be subsidizing the proprietary software industry. In the unlikely event that this were true, the State would have the right to apply the subsidies in the area it considered of greatest social value; it is undeniable, in this improbable hypothesis, that if the State decided to subsidize software, it would have to do so choosing the free over the proprietary, considering its social effect and the rational use of taxpayers money. In respect of the jobs generated by proprietary software in countries like ours, these mainly concern technical tasks of little aggregate value; at the local level, the technicians who provide support for proprietary software produced by transnational companies do not have the possibility of fixing bugs, not necessarily for lack of technical capability or of talent, but because they do not have access to the source code to fix it. With free software one creates more technically qualified employment and a framework of free competence where success is only tied to the ability to offer good technical support and quality of service, one stimulates the market, and one increases the shared fund of knowledge, opening up alternatives to generate services of greater total value and a higher quality level, to the benefit of all involved: producers, service organizations, and consumers. It is a common phenomenon in developing countries that local software industries obtain the majority of their takings in the service sector, or in the creation of "ad hoc" software. Therefore, any negative impact that the application of the Bill might have in this sector will be more than compensated by a growth in demand for services (as long as these are carried out to high quality standards). 
If the transnational software companies decide not to compete under these new rules of the game, it is likely that they will undergo some decrease in takings in terms of payment for licenses; however, considering that these firms continue to allege that much of the software used by the State has been illegally copied, one can see that the impact will not be very serious. Certainly, in any case their fortune will be determined by market laws, changes in which cannot be avoided; many firms traditionally associated with proprietary software have already set out on the road (supported by copious expense) of providing services associated with free software, which shows that the models are not mutually exclusive. With this bill the State is deciding that it needs to preserve certain fundamental values. And it is deciding this based on its sovereign power, without affecting any of the constitutional guarantees. If these values could be guaranteed without having to choose a particular economic model, the effects of the law would be even more beneficial. In any case, it should be clear that the State does not choose an economic model; if it happens that there only exists one economic model capable of providing software which provides the basic guarantee of these principles, this is because of historical circumstances, not because of an arbitrary choice of a given model. Your letter continues: "4. The bill imposes the use of open source software without considering the dangers that this can bring from the point of view of security, guarantee, and possible violation of the intellectual property rights of third parties." Alluding in an abstract way to "the dangers this can bring", without specifically mentioning a single one of these supposed dangers, shows at the least some lack of knowledge of the topic. So, allow me to enlighten you on these points. On security: National security has already been mentioned in general terms in the initial discussion of the basic principles of the bill. In more specific terms, relative to the security of the software itself, it is well known that all software (whether proprietary or free) contains errors or "bugs" (in programmers' slang). But it is also well known that the bugs in free software are fewer, and are fixed much more quickly, than in proprietary software. It is not in vain that numerous public bodies responsible for the IT security of state systems in developed countries require the use of free software for the same conditions of security and efficiency. What is impossible to prove is that proprietary software is more secure than free, without the public and open inspection of the scientific community and users in general. This demonstration is impossible because the model of proprietary software itself prevents this analysis, so that any guarantee of security is based only on promises of good intentions (biased, by any reckoning) made by the producer itself, or its contractors. It should be remembered that in many cases, the licensing conditions include Non-Disclosure clauses which prevent the user from publicly revealing security flaws found in the licensed proprietary product. In respect of the guarantee: As you know perfectly well, or could find out by reading the "End User License Agreement" of the products you license, in the great majority of cases the guarantees are limited to replacement of the storage medium in case of defects, but in no case is compensation given for direct or indirect damages, loss of profits, etc... 
If as a result of a security bug in one of your products, not fixed in time by yourselves, an attacker managed to compromise crucial State systems, what guarantees, reparations and compensation would your company make in accordance with your licensing conditions? The guarantees of proprietary software, inasmuch as programs are delivered AS IS'', that is, in the state in which they are, with no additional responsibility of the provider in respect of function, in no way differ from those normal with free software. On Intellectual Property: Questions of intellectual property fall outside the scope of this bill, since they are covered by specific other laws. The model of free software in no way implies ignorance of these laws, and in fact the great majority of free software is covered by copyright. In reality, the inclusion of this question in your observations shows your confusion in respect of the legal framework in which free software is developed. The inclusion of the intellectual property of others in works claimed as one's own is not a practice that has been noted in the free software community; whereas, unfortunately, it has been in the area of proprietary software. As an example, the condemnation by the Commercial Court of Nanterre, France, on 27th September 2001 of Microsoft Corp. to a penalty of 3 million francs in damages and interest, for violation of intellectual property (piracy, to use the unfortunate term that your firm commonly uses in its publicity). You go on to say that: "The bill uses the concept of open source software incorrectly, since it does not necessarily imply that the software is free or of zero cost, and so arrives at mistaken conclusions regarding State savings, with no cost-benefit analysis to validate its position." This observation is wrong; in principle, freedom and lack of cost are orthogonal concepts: there is software which is proprietary and charged for (for example, MS Office), software which is proprietary and free of charge (MS Internet Explorer), software which is free and charged for (Red Hat, SuSE etc GNU/Linux distributions), software which is free and not charged for (Apache, Open Office, Mozilla), and even software which can be licensed in a range of combinations (MySQL). Certainly free software is not necessarily free of charge. And the text of the bill does not state that it has to be so, as you will have noted after reading it. The definitions included in the Bill state clearly *what* should be considered free software, at no point referring to freedom from charges. Although the possibility of savings in payments for proprietary software licenses are mentioned, the foundations of the bill clearly refer to the fundamental guarantees to be preserved and to the stimulus to local technological development. Given that a democratic State must support these principles, it has no other choice than to use software with publicly available source code, and to exchange information only in standard formats. If the State does not use software with these characteristics, it will be weakening basic republican principles. Luckily, free software also implies lower total costs; however, even given the hypothesis (easily disproved) that it was more expensive than proprietary software, the simple existence of an effective free software tool for a particular IT function would oblige the State to use it; not by command of this Bill, but because of the basic principles we enumerated at the start, and which arise from the very essence of the lawful democratic State. 
You continue: "6. It is wrong to think that Open Source Software is free of charge. Research by the Gartner Group (an important investigator of the technological market recognized at world level) has shown that the cost of purchase of software (operating system and applications) is only 8% of the total cost which firms and institutions take on for a rational and truly beneficial use of the technology. The other 92% consists of: installation costs, enabling, support, maintenance, administration, and down-time." This argument repeats that already given in paragraph 5 and partly contradicts paragraph 3. For the sake of brevity we refer to the comments on those paragraphs. However, allow me to point out that your conclusion is logically false: even if according to Gartner Group the cost of software is on average only 8% of the total cost of use, this does not in any way deny the existence of software which is free of charge, that is, with a licensing cost of zero. In addition, in this paragraph you correctly point out that the service components and losses due to down-time make up the largest part of the total cost of software use, which, as you will note, contradicts your statement regarding the small value of services suggested in paragraph 3. Now the use of free software contributes significantly to reduce the remaining life-cycle costs. This reduction in the costs of installation, support etc. can be noted in several areas: in the first place, the competitive service model of free software, support and maintenance for which can be freely contracted out to a range of suppliers competing on the grounds of quality and low cost. This is true for installation, enabling, and support, and in large part for maintenance. In the second place, due to the reproductive characteristics of the model, maintenance carried out for an application is easily replicable, without incurring large costs (that is, without paying more than once for the same thing) since modifications, if one wishes, can be incorporated in the common fund of knowledge. Thirdly, the huge costs caused by non-functioning software ("blue screens of death", malicious code such as virus, worms, and trojans, exceptions, general protection faults and other well-known problems) are reduced considerably by using more stable software; and it is well known that one of the most notable virtues of free software is its stability. You further state that: "7. One of the arguments behind the bill is the supposed freedom from costs of open-source software, compared with the costs of commercial software, without taking into account the fact that there exist types of volume licensing which can be highly advantageous for the State, as has happened in other countries." I have already pointed out that what is in question is not the cost of the software but the principles of freedom of information, accessibility, and security. These arguments have been covered extensively in the preceding paragraphs to which I would refer you. On the other hand, there certainly exist types of volume licensing (although unfortunately proprietary software does not satisfy the basic principles). But as you correctly pointed out in the immediately preceding paragraph of your letter, they only manage to reduce the impact of a component which makes up no more than 8% of the total. You continue: "8. 
In addition, the alternative adopted by the bill (I) is clearly more expensive, due to the high costs of software migration, and (II) puts at risk compatibility and interoperability of the IT platforms within the State, and between the State and the private sector, given the hundreds of versions of open source software on the market." Let us analyze your statement in two parts. Your first argument, that migration implies high costs, is in reality an argument in favor of the Bill. Because the more time goes by, the more difficult migration to another technology will become; and at the same time, the security risks associated with proprietary software will continue to increase. In this way, the use of proprietary systems and formats will make the State ever more dependent on specific suppliers. Once a policy of using free software has been established (which certainly, does imply some cost) then on the contrary migration from one system to another becomes very simple, since all data is stored in open formats. On the other hand, migration to an open software context implies no more costs than migration between two different proprietary software contexts, which invalidates your argument completely. The second argument refers to "problems in interoperability of the IT platforms within the State, and between the State and the private sector" This statement implies a certain lack of knowledge of the way in which free software is built, which does not maximize the dependence of the user on a particular platform, as normally happens in the realm of proprietary software. Even when there are multiple free software distributions, and numerous programs which can be used for the same function, interoperability is guaranteed as much by the use of standard formats, as required by the bill, as by the possibility of creating interoperable software given the availability of the source code. You then say that: "9. The majority of open source code does not offer adequate levels of service nor the guarantee from recognized manufacturers of high productivity on the part of the users, which has led various public organizations to retract their decision to go with an open source software solution and to use commercial software in its place." This observation is without foundation. In respect of the guarantee, your argument was rebutted in the response to paragraph 4. In respect of support services, it is possible to use free software without them (just as also happens with proprietary software), but anyone who does need them can obtain support separately, whether from local firms or from international corporations, again just as in the case of proprietary software. On the other hand, it would contribute greatly to our analysis if you could inform us about free software projects *established* in public bodies which have already been abandoned in favor of proprietary software. We know of a good number of cases where the opposite has taken place, but not know of any where what you describe has taken place. You continue by observing that: "10. The bill discourages the creativity of the Peruvian software industry, which invoices 40 million US$/year, exports 4 million US$ (10th in ranking among non-traditional exports, more than handicrafts) and is a source of highly qualified employment. With a law that encourages the use of open source, software programmers lose their intellectual property rights and their main source of payment." It is clear enough that nobody is forced to commercialize their code as free software. 
The only thing to take into account is that if it is not free software, it cannot be sold to the public sector. This is not in any case the main market for the national software industry. We covered some questions referring to the influence of the Bill on the generation of employment which would be both highly technically qualified and in better conditions for competition above, so it seems unnecessary to insist on this point. What follows in your statement is incorrect. On the one hand, no author of free software loses his intellectual property rights, unless he expressly wishes to place his work in the public domain. The free software movement has always been very respectful of intellectual property, and has generated widespread public recognition of its authors. Names like those of Richard Stallman, Linus Torvalds, Guido van Rossum, Larry Wall, Miguel de Icaza, Andrew Tridgell, Theo de Raadt, Andrea Arcangeli, Bruce Perens, Darren Reed, Alan Cox, Eric Raymond, and many others, are recognized world-wide for their contributions to the development of software that is used today by millions of people throughout the world. On the other hand, to say that the rewards for authors rights make up the main source of payment of Peruvian programmers is in any case a guess, in particular since there is no proof to this effect, nor a demonstration of how the use of free software by the State would influence these payments. You go on to say that: "11. Open source software, since it can be distributed without charge, does not allow the generation of income for its developers through exports. In this way, the multiplier effect of the sale of software to other countries is weakened, and so in turn is the growth of the industry, while Government rules ought on the contrary to stimulate local industry." This statement shows once again complete ignorance of the mechanisms of and market for free software. It tries to claim that the market of sale of non- exclusive rights for use (sale of licenses) is the only possible one for the software industry, when you yourself pointed out several paragraphs above that it is not even the most important one. The incentives that the bill offers for the growth of a supply of better qualified professionals, together with the increase in experience that working on a large scale with free software within the State will bring for Peruvian technicians, will place them in a highly competitive position to offer their services abroad. You then state that: "12. In the Forum, the use of open source software in education was discussed, without mentioning the complete collapse of this initiative in a country like Mexico, where precisely the State employees who founded the project now state that open source software did not make it possible to offer a learning experience to pupils in the schools, did not take into account the capability at a national level to give adequate support to the platform, and that the software did not and does not allow for the levels of platform integration that now exist in schools." In fact Mexico has gone into reverse with the Red Escolar (Schools Network) project. This is due precisely to the fact that the driving forces behind the Mexican project used license costs as their main argument, instead of the other reasons specified in our project, which are far more essential. 
Because of this conceptual mistake, and as a result of the lack of effective support from the SEP (Secretary of State for Public Education), the assumption was made that to implant free software in schools it would be enough to drop their software budget and send them a CD ROM with Gnu/Linux instead. Of course this failed, and it couldn't have been otherwise, just as school laboratories fail when they use proprietary software and have no budget for implementation and maintenance. That's exactly why our bill is not limited to making the use of free software mandatory, but recognizes the need to create a viable migration plan, in which the State undertakes the technical transition in an orderly way in order to then enjoy the advantages of free software. You end with a rhetorical question: "13. If open source software satisfies all the requirements of State bodies, why do you need a law to adopt it? Shouldn't it be the market which decides freely which products give most benefits or value?" We agree that in the private sector of the economy, it must be the market that decides which products to use, and no state interference is permissible there. However, in the case of the public sector, the reasoning is not the same: as we have already established, the state archives, handles, and transmits information which does not belong to it, but which is entrusted to it by citizens, who have no alternative under the rule of law. As a counterpart to this legal requirement, the State must take extreme measures to safeguard the integrity, confidentiality, and accessibility of this information. The use of proprietary software raises serious doubts as to whether these requirements can be fulfilled, lacks conclusive evidence in this respect, and so is not suitable for use in the public sector. The need for a law is based, firstly, on the realization of the fundamental principles listed above in the specific area of software; secondly, on the fact that the State is not an ideal homogeneous entity, but made up of multiple bodies with varying degrees of autonomy in decision making. Given that it is inappropriate to use proprietary software, the fact of establishing these rules in law will prevent the personal discretion of any state employee from putting at risk the information which belongs to citizens. And above all, because it constitutes an up-to-date reaffirmation in relation to the means of management and communication of information used today, it is based on the republican principle of openness to the public. In conformance with this universally accepted principle, the citizen has the right to know all information held by the State and not covered by well- founded declarations of secrecy based on law. Now, software deals with information and is itself information. Information in a special form, capable of being interpreted by a machine in order to execute actions, but crucial information all the same because the citizen has a legitimate right to know, for example, how his vote is computed or his taxes calculated. And for that he must have free access to the source code and be able to prove to his satisfaction the programs used for electoral computations or calculation of his taxes. I wish you the greatest respect, and would like to repeat that my office will always be open for you to expound your point of view to whatever level of detail you consider suitable. Cordially,
    DR. EDGAR DAVID VILLANUEVA NÚÑEZ
    Congressman of the Republic of Perú.

    3 December 2010

    Russell Coker: Aspie Social Skills and the Free Software Community

    LWN has an article by Valerie Aurora titled The dark side of open source conferences [1] which is about sexual harassment and sexual assault at Free Software conferences. Apparently some conferences create such a bad environment that some people won't attend. It's a well researched article that everyone in the community should read.

    The Autism Derailment

    The comments have the usual mix of insight, foolishness, and derailment that you expect from such discussions. One derailment thread that annoyed me is the discussion about men on the Autism Spectrum started by Joe Buck [2]. Joe seems to believe that the 1% of males on the Autism Spectrum (and something greater than 1% but a lot less than 50% in the Free Software community) are a serious part of the problem because they supposedly hit on women who aren't interested in them, in spite of the fact that the article in question is about women who are being insulted, harassed, and groped at open source conferences. The article had no mention of men who try to chat up women; presumably this was a deliberate decision to focus on sexual assault and harassment rather than what Joe wanted to talk about.

    In response Mackenzie made the following insightful point: "I don't think any autistic person who is high-functioning enough to A) contribute to open source B) want to be at an event with so many people and C) carry on any sort of conversation is low-functioning enough not to understand 'stop' or 'no'. If you can understand your patch has been rejected, you can likely understand 'don't do that again'."

    Understanding how Other People Feel

    Bruce Perens claimed "What they [Aspies] don't understand is how the other person in the situation feels". Like many (possibly most) people, Bruce doesn't seem to get the fact that no-one can really understand how other people feel. The best logical analysis of this seems to be the Changing Emotions article on Less Wrong [3]. While Less Wrong deals with Male to Female conversion as the example (which may be relevant to the discussion about the treatment of women), the same logic also applies to smaller changes. Anyone who thinks that they would always be able to understand how their identical twin felt (if they had one) probably hasn't considered these issues much. As an aside, having a psychologist diagnose you as being on the Autism Spectrum, and therefore by implication thinking differently to 99% of the population, really makes you consider the ways in which other people might have different thought processes and experiences.

    Every time we have a discussion about issues related to sexism in the Free Software community we get a lot of documented evidence that there are many people who are apparently neuro-typical (IE not Autistic) who don't understand how other people think; in many cases they go so far as to tell other people what their emotional state should be.

    What Really Happens

    Nix said "However, in that situation our natural reflex is to *get out of there*, not to jump on women like some sort of slobbering caveman", which is a really good summary.

    In more detail, I think that the vast majority of guys who are on the Autism Spectrum and who are able to do things like attend computer conferences (*) realise that chatting up a random girl that they meet is something that just isn't going to work out. Generally people don't attempt things that they expect to fail, so I don't think that Autistic guys are going to be hitting on girls at conferences.

    (*) Having never met any Autistic people who aren't capable of attending such conferences I can't speak for them. I really doubt that the Low Functioning Autistic guys are as much of a problem as some people claim, but lack evidence. In any case the actions of people who don't attend conferences aren't relevant to a discussion about things that happen at conferences.

    Update: It Keeps Going

    Dion claims that the misogyny at conferences is due to socially inept people; he also casually switches between discussing people who misunderstand when someone is flirting and people who hire almost-naked booth-babes (two very different classes of action) [4]. Several people asked for supporting evidence; naturally none was provided.

    In response njs posted a link to Marissa Lingen's blog post Don't blame autism, dammit [5]. Marissa points out that people who offend other people due to lacking social skills will tend to do so in times and places that are likely to get a bad reaction; if you don't know that you are doing something wrong then there's no reason to hide it. If someone offends a senior manager at a corporate event then it could be because they are on the Autism Spectrum (I've apparently done that). If someone offends junior people at times and places where there are no witnesses, but is always nice to managers and other powerful people, then it's not related to Autism.

    One final note: I have little tolerance for anyone who claims to be an Aspie only when they do something wrong. You are either on the Autism Spectrum all the time or none of it. Anyone who wants any sympathy from me for an occasion where they stuffed up due to being an Aspie can start by making a clear statement about where they are on the Autism Spectrum.

    Update2: Yet More from Bruce Perens

    Bruce wrote "IMO, the kind of men who go in to software engineering suffer a lack of healthy interaction with women who are their peers, and it may be that the high incidence of empathy disorders in our field is involved" (which seems to be part of the inspiration for Joe Buck later in that thread) and now claims "Nobody here was trying to connect Asperger's or autism with the touching incidents or violent crime".

    Matthew Garrett responded to that with "If you weren't trying to say that the high incidence of empathy disorders in our field was related to a lack of healthy interaction with women who are their peers, and that that has something to do with incidents of sexual harassment or assault at conferences, what were you trying to say? Because that sounds awfully like 'We wouldn't have so many problems if it weren't for all the autists'."

    Bruce's latest comment is "If you choose to read something that nasty into my writing, that's your problem. Get therapy."

    Through this discussion I've been unsure whether to interpret the statements by Bruce et al the way Matthew does, or whether I should consider them as merely a desperate attempt to derail the discussion. I can't imagine any possible way of interpreting such comments in connection with the discussion of sexual assault as anything other than either trivialising violent crimes against women (suggesting that they are no worse than asking out someone who's not interested) or claiming that anyone who lacks social skills should be treated as a violent sexual predator. It's just not reasonable to believe that every single person who wrote such comments referring to Autism was misunderstood and really meant something nice.

    As a general rule I don't think that it's the responsibility of other people to try and find a non-offensive interpretation of something that one might say. I don't think that all the people who strongly disagree with the most obvious and reasonable interpretations of Bruce's comments should get therapy. I think that Bruce should explain what he means clearly.

    2 December 2010

    Theodore Ts'o: Close the Washington Monument

    Bruce Schneier has written an absolutely powerful essay on his blog, with the modest proposal that in response to the security worries at the Washington Monument, we should close it. If you haven't read it yet, run, don't walk, to his blog and read it. Then, if you live in the States, write to your congresscritters, ask them to reinsert the backbone which they placed in a blind trust when they got elected, and tell the TSA that they have a new mandate: to provide as much security as possible without compromising our freedom, privacy, and American ideals. Right now they have an impossible job, because they have been asked to provide an absolute degree of security. And in trying to provide the impossible, the terrorists have already won.

    18 November 2010

    John Goerzen: The TSA: Stupid, Owned, or Complicit?

    I have long been in Bruce Schneier's camp, thinking that the TSA is a joke: nothing but security theater. A few recent examples come to mind: I don't get it. They have been completely reactionary since they began. They have a complete failure of institutional imagination. Something happens, and then a new rule comes out to prevent the thing that everybody is now expecting. And what happens about the thing that people aren't expecting yet? Nothing. So we now have to take off our shoes because one guy tried to use them for something nefarious. OK, fine, but the next guy is probably going to try something other than shoes. Which is why, I'm sure, many people are pointing out that the TSA is over-reliant on technology and device detection and completely underemphasizing evildoer detection, which, we are repeatedly reminded, the Israelis excel at. The TSA's attempt to remedy that was foolish at best, and, according to a recent report, not grounded in science. Which is why I am heartened that, almost a decade after 9/11, Americans are starting to let go of their fear and be ready to reclaim some sense of intelligence at the security line. The fact that politicians think there is something to be gained by being tough on the TSA's invasive screening procedures, rather than risk looking soft on terrorism, is evidence of this. So, what I haven't yet worked out is this: What gives, TSA? Are they: (Note: this criticism is directed mostly at the upper levels of TSA management; I do not believe the people most of us see have the ability to change the system, even if they wanted to.) One final word: I also get annoyed at all the people that grouse at the TSA checking 80-year-olds as thoroughly as everyone else. An 80-year-old could be wearing a hidden device just as much as anyone else could, and if we don't check them, then someday they probably will. The key is to be smart about who we check carefully. Use data, behavioral analysis, simple questioning, etc. It works, and is a lot better than exempting people under 13 and over 80 from screening on arbitrary grounds. Also, it might help anyone with a blurry groin. And it might just save a bunch of us from getting cancer.

    14 October 2010

    Russell Coker: Links October 2010

    Bruce Schneier wrote an insightful post about why designing products for wiretapping is a bad idea [1]. It seems that large parts of the Internet will be easy to tap (for both governments and criminals) in the near future unless something is done. The bad results of criminal use will outweigh any benefits of government use. Sam Watkins wrote an informative post about Android security [2]. Among other things, any application can read all stored data, including all photos; that's got to be a problem for anyone who photographs themself naked. Rebecca Saxe gave an interesting TED talk about how brains make moral judgements [3]. Somehow she managed to speak about the Theory of Mind without mentioning Autism once. The Guardian has an amusing article by Cory Doctorow about security policies in banks [4]. He advocates promoting statistical literacy (or at least not promoting a lack of it) as a sound government policy. He also suggests allowing regulators to fine banks that get it wrong. Steven Johnson gave an interesting TED talk about Where Good Ideas Come From [5]. It's a bit slow at the start but gets good at the end. Adam Grosser gave an interesting TED talk about a fridge that was designed for use in Africa [6]. The core of the Absorption Refrigerator is designed to be heated in a pot of water in a cooking fire and it can then keep food cool for 12 hours. It's a pity that they couldn't design it to work on solar power to avoid the fuel use for the cooking fire. Josh Silver gave an interesting TED talk about liquid filled spectacles [7]. The glasses are shipped with a syringe filled with liquid at each side that is used to inflate the lenses to the desired refractive index. The wearer can just adjust the syringes until they get to the right magnification; as there are separate syringes, the glasses work well for people whose eyes aren't identical (which is most people). Once the syringes are at the right spots the user can tighten some screws to prevent further transfer of liquid and cut the syringes off to give glasses that aren't overly heavy but which can't be adjusted any more. I guess that a natural extension to this would be to allow the syringes to be re-attached so that the user could adjust them every year to match declining vision. One thing that this wouldn't do is correct for Astigmatism (where the lens of the eye doesn't focus light to a point), but I guess they could make lenses to deal with a few common varieties of Astigmatism so that most people who have that problem can get a reasonable approximation. The current best effort is to make the glasses cost $19, which is 19 days' salary for some of the poorest people in the world. Glasses in Australia cost up to $650 for a pair (or a more common cost of $200, or about $100 after Medicare) which would be about one day's salary. Eben Bayer gave an inspiring TED talk about one of the ways that mushrooms can save the planet [8]. He has designed molds that can be filled with Pasteurised organic waste (seed husks etc) and then seeded with fungal spores. The fungus then grows mycelium (thin fungal root fibers) through the organic waste making it into a solid structure which fits the shape of the mold. This is currently being used to replace poly-styrene foam for packaging and can apparently be used for making tiles that are fire retardant and sound proof for constructing buildings. The main benefits of the material are that it can be cheaply made without petrochemicals and that it is bio-degradable. I'm not sure how the bio-degradable part would work for constructing buildings; maybe they would just replace the panels every few years. Annie Lennox gave a TED talk about her Sing foundation to care for women and children who are affected by AIDS [9]. She describes the effect of AIDS in Africa as Genocide. Robert Ballard gave a very informative TED talk about exploring the oceans [10]. This was one of the most informative TED talks I've seen, and Robert is also one of the most enthusiastic speakers I've seen; it's really worth watching! We really need more money spent on exploring the oceans. Jessa Gamble gave an interesting TED talk which suggests that the best thing to do is to go to bed at about sunset and then have a couple of hours of relaxing time during the middle of the night [11]. Apparently the subjects of body-clock experiments who live for a month in a bunker without natural light or access to a clock get better sleep in this manner than they ever had in their life and feel fully awake for the first time. World Changing is a blog that has a lot of interesting articles about climate change and related issues [12]. It's worth a read. Cynthia Schneider gave an interesting TED talk about how reality competition TV is affecting reality [13]. Shows that are derived from the American Idol concept are driving a resurgence in some traditional forms of performance art while also promoting equality; among other things it's apparent that winning is more important than misogyny. The Ritual of the Calling of an Engineer is an interesting concept [14]. I think it would be good to have something similar for Computer Science. Benjamin Mako Hill wrote an interesting and insightful essay about Piracy and Free Software [15].

    6 October 2010

    Russ Allbery: Review: Thinking in Java

    Review: Thinking in Java, by Bruce Eckel
    Publisher: Prentice Hall
    Copyright: 2006
    Printing: October 2009
    ISBN: 0-13-187248-6
    Format: Trade paperback
    Pages: 1461
    Thinking in Java has gone through multiple editions. This review is of the 4th edition, which is targeted at Java 5 (with some minor updates for Java 6, released as it was going to press). Prior to reading this book, I'd never written code from scratch in Java and had never learned anything about Java in a structured fashion. My CS degrees come before the era of ubiquitous Java; the most I'd ever done with the language was look over and make some minor fixes to code other people had written, using general programming knowledge. I chose this book as my introduction because I was looking for a ground-up introduction with good reviews and was avoiding books that contained a printed version of the Java library documentation. I can get that myself on-line. I kept a set of notes on my impressions of the Java language itself separately and will not get into that here. This is, instead, a review of Thinking in Java as a book and as a training tool for someone who wants to pick up the language. First off, this is a remarkably well-titled book, much more so than the typical programming language book. Eckel is not just teaching Java as one programming language among many. He tries hard to teach the reader to think in Java, with a lot of attention to idiomatic use of the language and natural ways to use its expressiveness. This is exactly what I was looking for (I don't want to just write C or Perl in Java syntax) and I was very satisfied with the result. The book is also very heavy on source code examples; this is much of the reason why it's 1,461 pages. Eckel illustrates everything with Java code, generally complete functioning short programs. Even exploratory musings and experiments are implemented as Java code. I don't think there's any significant thought in this book that isn't shown in both text and in explained Java code. Eckel has also clearly put significant effort into choosing good examples and designing his sample programs. I have some past (mild) experience with object-oriented design, and a lot of experience with higher-level system design, and I was satisfied and occasionally intrigued by the object structures and design tips that Eckel provides. He uses design pattern language heavily, but not in a confusing way, and he takes some time to explain typical design patterns and illustrate them repeatedly. This is a solid introduction to good object-oriented program design as well as an introduction to Java. One oddity of this book, though, is that Eckel introduces the language from the ground up, starting with syntax and basic types, and is very careful not to use any concepts before he introduces them. The result is a very thorough introduction to the Java language, but it means that it takes 900 pages (!) before Eckel introduces I/O and the possibility of writing a useful program based on just the information in the book. This is for entirely sound conceptual reasons: I/O in Java relies on lots of concepts, like exceptions and generics, that require extensive introduction and discussion. However, it does mean that through well more than half of a monumental tome the reader is learning the language through rather artificial programs. I have extensive prior programming background and a high tolerance for more abstract examples (I have a master's degree in software theory), so I could deal with this, but even with that background I was starting to feel like my head was getting too full and too crowded with concepts without being able to make them concrete.
It probably would have helped to work through some of the exercises, but I dislike exercises that aren't grounded in some problem I personally want to solve. If you're someone who only learns languages through practical application, this approach to teaching the language may drive you nuts, and you may want to look for another book that compromises conceptual progression to get you started faster. Related to this, Thinking in Java is not about the standard library. It's about the language, and the vast majority of this book is focused directly on the language level: inheritance, information hiding, generics, exceptions, introspection, concurrency, annotations, serialization, and so forth. The only parts of the standard library that Eckel covers in depth are the container classes, I/O, and a somewhat odd and misplaced final chapter on Swing. There is nothing about databases, essentially nothing about web applications or containers, no network programming, and very little in the way of practical, applied problems. As you might guess from the combination of that limitation of scope and the length of this book, that means it is extremely comprehensive; I doubt there are many corners of the basic language semantics, including around generics, that Eckel leaves unexplored. This is exactly what I wanted (I know how to read library documentation myself and all language core libraries start looking mostly the same after you've programmed in five or six languages), but it's very different from the typical approach of a programming language book. Don't be surprised. This biggest problem with Thinking in Java from my perspective is that it defers many interesting topics to Eckel's web pages and to free companion books and material that is supposedly found there, to the point of advertising that on the cover. This is a good way to allow the reader to explore extra topics that are of specific interest while skipping others, provided that one actually delivers on that promise. However, Eckel's web site is dire. The first time I looked at it earlier this year I was unable to find most of the supplemental material the book promises is there, and now www.mindview.net doesn't resolve in DNS. I've found copies of some of that material hosted elsewhere in Google searches, but this is a rather frustrating experience if one was relying on that supplemental material to fill in the gaps. And the book is clearly written expecting people to go look at the web materials; Eckel refers to them probably a hundred times over the course of the book. Less of a serious problem, and more of a quirk, is the closing chapter on Swing. This chapter really doesn't fit much with the book. I suppose Eckel felt obligated to put in at least one chapter on some concrete applied topic (I personally would have preferred databases be that topic, but oh well), and it does serve a useful purpose in showing how concurrency, inner classes, inheritance, and introspection all work together in a significant Java library. But it's not presented as just an example, and it fits in oddly with the rest of the book. It also has an inexplicable middle section about Flash programming. I ended up mostly skimming this. But this isn't much of a flaw in the book; if one ignores 100 pages out of 1,450, that's not much of a loss. Eckel comes from a C++ background (he was a founding member of the C++ standardization committee), which shows up most prominantly in the generics chapter. 
It's a solid, thorough introduction to Java generics, but it also has an extensive exploration of all the problems with the Java erasure model and many, many examples of the things that one would find intuitive from C++ that don't work the same way in Java. If you come from a C++ background, this is quite possibly useful; if, like me, you don't care about C++, it's at best mildly interesting and gets a touch tedious. There are a few other places where Eckel pokes surprisingly sharp sticks at the Java designers, but taken in context I think this is a quirk of the book's conversational style (which is otherwise a feature). Eckel's love of Python is also somewhat entertaining (particularly the long advertisement for Python in the introductory chapters of a book on Java), but has little impact on the book. I think the largest drawback of it is that all the examples in this book are tested using a Python framework, which isn't so much a problem with them as a lost opportunity. They're annotated with metadata used by Eckel's Python test framework, and in a Java book I would have been much more interested in a good Java test framework built on top of something like JUnit. Oh well. Overall, this is a very solid book which, despite being enormous, does not feel pointlessly padded. It's thorough and exhaustive on its topic, but there's a point to nearly everything in here. It was also exactly the Java book I wanted: an introduction to the language, built on a firm foundation of object-oriented program design, with discussion and pointers on how to think idiomatically in the language and use it to its fullest potential, and without mindlessly reproducing the standard library documentation. If that's also what you're looking for in a Java book, I recommend it highly. Just know going in what sort of book you're getting, since its pacing and order of presentation may frustrate some learning styles. Rating: 8 out of 10
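
    As an aside on the erasure issues the review mentions, the following is a minimal sketch of my own (it is not taken from the book, and the class name ErasureDemo is made up) showing how Java's erased generics behave at runtime, which is the kind of surprise the generics chapter dwells on.

        import java.util.ArrayList;
        import java.util.List;

        // Minimal sketch of Java type erasure: generic type arguments do not exist at runtime.
        public class ErasureDemo {
            public static void main(String[] args) {
                List<String> strings = new ArrayList<String>();
                List<Integer> numbers = new ArrayList<Integer>();

                // Both lists share exactly the same runtime class, because <String> and
                // <Integer> are erased during compilation.
                System.out.println(strings.getClass() == numbers.getClass()); // prints "true"

                // The following do not compile, which is the sort of thing a C++ programmer
                // used to templates keeps tripping over:
                //   if (strings instanceof List<String>) { ... }  // non-reifiable type
                //   T t = new T();                                // no T left to instantiate
            }

            // You also cannot overload on different parameterizations; these two methods
            // have the same erasure and would clash if both were declared:
            //   void frob(List<String> l)  { }
            //   void frob(List<Integer> l) { }
        }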

    8 March 2010

    Russell Coker: Designing a Secure Linux System

The Threat

Bruce Schneier's blog post about the Mariposa Botnet has an interesting discussion in the comments about how to make a secure system [1]. Note that the threat is considered to be remote attackers, which means viruses and trojan horses, including infected files run from USB devices (i.e. you aren't safe just because you aren't on the Internet). The threat we are considering is not people who can replace hardware in the computer (people who have physical access to it, which includes people who have access to where it is located or who are employed to repair it). This is the most common case: the risk involved in stealing a typical PC is far greater than whatever benefit might be obtained from the data on it, so a typical computer user is at risk of theft only for the resale value of a second-hand computer. So the question is: how can we most effectively use free software to protect against such threats? The first restriction is that the hardware in common use is cheap and has little special functionality for security. Systems that have a TPM seem unlikely to provide a useful benefit, both because the TPM is designed more for Digital Restrictions Management than for protecting the user and because the TPM is not widely enough used.

The BIOS and the Bootloader

It seems that the first thing that is needed is a BIOS that is reliable. If an attacker manages to replace the BIOS then it could do exciting things like modifying the code of the kernel at boot time. It seems quite plausible for the real-mode boot loader code to be run in a VM86 session and to then have its memory modified before it switches to protected mode. Every BIOS update is a potential attack. Coreboot replaces the default PC BIOS; it initialises the basic hardware and then executes an OS kernel or boot loader [2] (the Coreboot Wikipedia page has a good summary). The hardest part of the system startup process is initialising the hardware, and Coreboot has that solved for 213 different motherboards. If engineers were allowed to freely design hardware without interference then probably a significant portion of the computers on the market would have a little switch to disable the write line for the flash BIOS. I heard a rumor that in the days of 286 systems a vendor of a secure OS shipped a scalpel to disable the hardware ability to leave protected mode; cutting a track on the motherboard is probably still an option. Usually once a system is working you don't want to upgrade the BIOS. One of the payloads for Coreboot is GRUB. The GRUB feature requests page has as its first entry "Option to check signatures of the bootchain up to the cryptsetup/luksOpen: MBR, grub partition, kernel, initramfs" [3]. Presumably this would allow a GPG signature to be checked so that a kernel and initrd would only be used if they came from a known good source. With this feature we could boot only a known good kernel.

How to run User Space

The next issue is how to run the user-space. There has been no shortage of Linux kernel exploits and I think it's reasonable to assume that there will continue to be a large number of exploits. Some of the kernel flaws will be known by the bad guys for some time before there are patches, and some of them will have patches which don't get applied as quickly as desired. I think we have to assume that the Linux kernel will be compromised. Therefore the regular user applications can't be run against a kernel that has direct hardware access.
It seems to me that the best way to go is to have the Linux kernel run in a virtual environment such as Xen or KVM. That means you have a hypervisor (Xen+Linux or Linux+KVM+QEMU) that controls the hardware and creates the environment for the OS image that the user interacts with. The hypervisor could create multiple virtual machines for different levels of data in a similar manner to the NSA NetTop project; that isn't really a required part of solving the general secure Internet terminal problem, but as it would be a tiny bit of extra work you might as well do it. One problem with using a hypervisor is that the video hardware tends to want to use features such as bus-mastering to give the best performance. Apparently KVM has IOMMU support, so it should be possible to grant a virtual machine enough hardware access to run 3D graphics at full speed without allowing it to break free.

Maintaining the Virtual Machine Image

Google has a good design for ChromiumOS in terms of security [4]. They are using CGroups [5] to control access to device nodes in jails, RAM, CPU time, and other resources. They also have some intrusion detection which can prompt a user to perform a hardware reset. Some of the features would need to be implemented in a different manner for a full desktop system, but most of the Google design features would work well. For an OS running in a virtual machine, when an intrusion is detected it would be best to have the hypervisor receive a message by some defined interface (maybe a line of text printed on the console) and then terminate and restart the virtual machine (see the sketch at the end of this post). Dumping the entire address space of the virtual machine would be a good idea too; with typical RAM sizes at around 4G for laptops and desktops, and typical storage sizes at around 200G for laptops and 2T for new desktops, it should be easy to store a few dumps in case they are needed.

The amount of data received by a typical ADSL link is not that great. Apart from the occasional big thing (like downloading a movie or listening to Internet radio for a long time), most data transfers are from casual web browsing, which doesn't involve that much data. A hypervisor could potentially store the last few gigabytes of data that were received, which would then permit forensic analysis if the virtual machine was believed to be compromised. With cheap SATA disks in excess of 1TB it would be conceivable to store the last few years of data transfer (with downloaded movies excluded), but such long-term storage would probably involve risks that would outweigh the rewards; storing no more than 24 hours of data would probably be best.

Finally, in terms of applying updates and installing new software, the only way to do this would be via the hypervisor, as you don't want any part of the virtual machine to be able to write to its data files or programs. So if the user selects to install a new application then the request "please install application X" would have to be passed to the hypervisor. After the application is installed a reboot of the virtual machine would be needed to apply the change. This is a common experience for mobile phones (where you even have to reboot if the telco changes some of their network settings) and it's something that MS-Windows users have become used to, but it would get a negative reaction from the more skilled Linux users.

Would this be Accepted?

The question is: if we built this, would people want to use it? The NetTop functionality of having two OSs interchangeable on the one desktop would attract some people.
But most users don't desire greater security and would find some reason to avoid this. They would claim that it lowered performance (even for aspects of performance where benchmarks revealed no difference) and claim that they don't need it. At this time it seems that computer security isn't regarded by users as a big enough problem. It seems that the same people who will avoid catching a train because one mugging made it to the TV news will happily keep using insecure computers in spite of the huge number of cases of fraud that are reported all the time.
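As promised above, here is a minimal, hypothetical host-side sketch of the "line of text printed on the console" interface: the guest's intrusion detection prints an agreed-upon marker line to its serial console, and the host kills and restarts the virtual machine from its known-good image when it sees that line. The QEMU invocation, the image name, and the marker string are illustrative assumptions, not anything prescribed above.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// Hypothetical host-side watcher: boot the guest, watch its serial console,
// and restart the guest whenever the agreed-upon marker line appears.
public class GuestConsoleWatcher {
    // Marker string and QEMU command line are assumptions for illustration;
    // any defined guest-to-host interface would do.
    private static final String MARKER = "INTRUSION-DETECTED";
    private static final String[] QEMU_CMD = {
        "qemu-system-x86_64", "-m", "2048",
        "-drive", "file=guest.img,format=raw",
        "-nographic"  // guest serial console is multiplexed onto stdio
    };

    public static void main(String[] args) throws IOException, InterruptedException {
        while (true) {
            Process guest = new ProcessBuilder(QEMU_CMD)
                    .redirectErrorStream(true)
                    .start();
            try (BufferedReader console = new BufferedReader(
                    new InputStreamReader(guest.getInputStream()))) {
                String line;
                while ((line = console.readLine()) != null) {
                    if (line.contains(MARKER)) {
                        // A real system would dump the guest's address space
                        // for forensic analysis before tearing it down.
                        guest.destroy();
                        break;
                    }
                }
            }
            guest.waitFor();
            // Loop around and boot the known-good image again.
        }
    }
}
```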
